Search for: All records

Creators/Authors contains: "Rao, Bhaskar D"

  1. Covariance matrix reconstruction has been the most widely used guiding objective in gridless direction-of-arrival (DoA) estimation for sparse linear arrays. Many semidefinite programming (SDP)-based methods fall under this category. Although deep learning-based approaches enable the construction of more sophisticated objective functions, most methods still rely on covariance matrix reconstruction. In this paper, we propose new loss functions that are invariant to the scaling of the matrices and provide a comparative study of losses with varying degrees of invariance. The proposed loss functions are formulated based on the scale-invariant signal-to-distortion ratio between the target matrix and the Gram matrix of the prediction. Numerical results show that a scale-invariant loss outperforms its non-invariant counterpart but is inferior to the recently proposed subspace loss that is invariant to the change of basis. These results provide evidence that designing loss functions with greater degrees of invariance is advantageous in deep learning-based gridless DoA estimation. 
    Free, publicly-accessible full text available April 6, 2026
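As an illustrative sketch of the scale-invariant loss described above: the scale-invariant signal-to-distortion ratio (SI-SDR) between a target matrix and the Gram matrix of a prediction can be computed on flattened matrices, with the optimal scaling absorbed analytically. Function names and shapes here are assumptions for illustration, not the paper's code.

```python
import numpy as np

def si_sdr_loss(T, W):
    """Negative scale-invariant SDR between a target matrix T and the
    Gram matrix W W^H of a prediction W (illustrative sketch)."""
    G = W @ W.conj().T               # Gram matrix of the prediction
    t = T.ravel()
    g = G.ravel()
    # Optimal scaling of the target toward the estimate
    alpha = np.vdot(t, g) / np.vdot(t, t)
    target = alpha * t
    distortion = g - target
    si_sdr = 10 * np.log10(np.real(np.vdot(target, target)) /
                           np.real(np.vdot(distortion, distortion)))
    return -si_sdr                   # minimize the negative ratio
```

Because the scaling is optimized out, the loss is unchanged when either the target or the prediction is rescaled, which is the invariance property the abstract compares across losses.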
  2. This paper introduces new and practically relevant non-Gaussian priors for the Sparse Bayesian Learning (SBL) framework applied to the Multiple Measurement Vector (MMV) problem. We extend the Gaussian Scale Mixture (GSM) framework to model prior distributions for row vectors, exploring the use of shared and different hyperparameters across different measurements. We propose Expectation Maximization (EM) based algorithms to estimate the parameters of the prior density along with the hyperparameters. To promote sparsity more effectively in a non-Gaussian setting, we show the importance of incorporating learning of the parameters of the mixing density. Such an approach effectively utilizes the common support notion in the MMV problem and promotes sparsity without explicitly imposing a sparsity-promoting prior, indicating the methods’ robustness to model mismatches. Numerical simulations are provided to compare the proposed approaches with the existing SBL algorithm for the MMV problem. 
    Free, publicly-accessible full text available April 6, 2026
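For context, the classic Gaussian-prior SBL-EM iteration for the MMV problem that this work generalizes can be sketched as follows. This is the textbook baseline, not the proposed non-Gaussian GSM method; variable names are illustrative.

```python
import numpy as np

def sbl_mmv_em(A, Y, noise_var, n_iter=50):
    """Classic Gaussian-prior SBL-EM for the MMV problem Y = A X + N,
    where rows of X share a common support (baseline sketch)."""
    m, n = A.shape
    gamma = np.ones(n)                           # row-variance hyperparameters
    for _ in range(n_iter):
        Gamma = np.diag(gamma)
        Sigma_y = noise_var * np.eye(m) + A @ Gamma @ A.T
        # Posterior mean of X and diagonal of the posterior covariance
        M = Gamma @ A.T @ np.linalg.solve(Sigma_y, Y)
        K = np.linalg.solve(Sigma_y, A @ Gamma)
        Sigma_x_diag = gamma - np.einsum('ij,ji->i', Gamma @ A.T, K)
        # EM update of the hyperparameters (common support across snapshots)
        gamma = np.mean(M**2, axis=1) + Sigma_x_diag
    return M, gamma
```

Rows whose hyperparameter `gamma[i]` converges to zero are pruned from the support, which is how the shared-support structure of the MMV problem is exploited.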
  3. Free, publicly-accessible full text available April 6, 2026
  4. Sparse Bayesian Learning (SBL) is a popular sparse signal recovery method, and various algorithms exist under the SBL paradigm. In this paper, we introduce a novel re-parameterization that allows the iterations of existing algorithms to be viewed as special cases of a unified and general mapping function. Furthermore, the re-parameterization enables an interesting beamforming interpretation that lends insight into all the considered algorithms. Utilizing the abstraction allowed by the general mapping viewpoint, we introduce a novel neural network architecture for learning improved iterative update rules under the SBL framework. Our modular design of the architecture makes the model independent of the size of the measurement matrix and provides a unique opportunity to test generalization across different measurement matrices. We show that the network, when trained on a particular parameterized dictionary, generalizes in ways hitherto not possible: across measurement matrices of different types and dimensions and across numbers of snapshots. Our numerical results showcase the generalization capability of our network in terms of mean square error and probability of support recovery across sparsity levels, signal-to-noise ratios, numbers of snapshots, and multiple measurement matrices of different sizes. 
    Free, publicly-accessible full text available April 6, 2026
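To see what "iterations as special cases of a unified mapping" can look like, here is a sketch in which two classic SBL hyperparameter updates from the literature (the EM rule and MacKay's fixed-point rule) are instances of one mapping over shared posterior statistics. This illustrates the abstraction only; it is not the paper's re-parameterization.

```python
import numpy as np

def sbl_stats(A, y, gamma, noise_var):
    """Posterior quantities shared by the classic SBL update rules."""
    m = A.shape[0]
    Sigma_y = noise_var * np.eye(m) + A @ np.diag(gamma) @ A.T
    mu = gamma * (A.T @ np.linalg.solve(Sigma_y, y))   # posterior mean
    # s[i] = posterior variance Sigma_ii = gamma_i - gamma_i^2 a_i^T Sigma_y^-1 a_i
    s = gamma - gamma**2 * np.einsum('ij,ij->j', A, np.linalg.solve(Sigma_y, A))
    return mu, s

def update(gamma, mu, s, rule='em'):
    """One generic mapping; classic iterations are special cases of it."""
    if rule == 'em':      # EM:     gamma_i <- mu_i^2 + Sigma_ii
        return mu**2 + s
    if rule == 'mackay':  # MacKay: gamma_i <- mu_i^2 / (1 - Sigma_ii / gamma_i)
        return mu**2 / (1.0 - s / gamma)
    raise ValueError(rule)
```

A learned update rule, as proposed in the paper, would replace the hand-designed `update` mapping with a trained network acting on the same kind of statistics.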
  5. Free, publicly-accessible full text available April 6, 2026
  6. Localizing more sources than sensors with a sparse linear array (SLA) has long relied on minimizing a distance between two covariance matrices and recent algorithms often utilize semidefinite programming (SDP). Although deep neural network (DNN)-based methods offer new alternatives, they still depend on covariance matrix fitting. In this paper, we develop a novel methodology that estimates the co-array subspaces from a sample covariance for SLAs. Our methodology trains a DNN to learn signal and noise subspace representations that are invariant to the selection of bases. To learn such representations, we propose loss functions that gauge the separation between the desired and the estimated subspace. In particular, we propose losses that measure the length of the shortest path between subspaces viewed on a union of Grassmannians, and prove that it is possible for a DNN to approximate signal subspaces. The computation of learning subspaces of different dimensions is accelerated by a new batch sampling strategy called consistent rank sampling. The methodology is robust to array imperfections due to its geometry-agnostic and data-driven nature. In addition, we propose a fully end-to-end gridless approach that directly learns angles to study the possibility of bypassing subspace methods. Numerical results show that learning such subspace representations is more beneficial than learning covariances or angles. It outperforms conventional SDP-based methods such as the sparse and parametric approach (SPA) and existing DNN-based covariance reconstruction methods for a wide range of signal-to-noise ratios (SNRs), snapshots, and source numbers for both perfect and imperfect arrays. 
    Free, publicly-accessible full text available January 1, 2026
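The "length of the shortest path between subspaces" used by the losses above is the geodesic distance on the Grassmannian, which can be computed from the principal angles between two orthonormal bases. A minimal sketch (the paper's actual losses operate on a union of Grassmannians and inside a training loop):

```python
import numpy as np

def grassmann_distance(U1, U2):
    """Geodesic distance between the subspaces spanned by the orthonormal
    bases U1 and U2, via principal angles (illustrative sketch)."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))   # principal angles
    return np.linalg.norm(theta)               # length of the shortest path
```

Because the singular values of `U1.T @ U2` depend only on the spanned subspaces, the distance is invariant to the choice of basis, which is the key property the trained representations must respect.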
  7. This paper addresses the high overheads associated with intelligent reflecting surface (IRS) aided wireless systems. By exploiting the inherent spatial correlation among the IRS elements, we propose a novel approach that randomly samples the IRS phase configurations from a carefully designed distribution and opportunistically schedules the user equipments (UEs) for data transmission. The key idea is that when IRS configuration is randomly chosen from a channel statistics-aware distribution, it will be near-optimal for at least one UE, and upon opportunistically scheduling that UE, we can obtain nearly all the benefits from the IRS without explicitly optimizing it. We formulate and solve a variational functional problem to derive the optimal phase sampling distribution. We show that, when the IRS phase configuration is drawn from the optimized distribution, it is sufficient for the number of UEs to scale exponentially with the rank of the channel covariance matrix, not with the number of IRS elements, to achieve a given target SNR with high probability. Our numerical studies reveal that even with a moderate number of UEs, the opportunistic scheme achieves near-optimal performance without incurring the conventional IRS-related signaling overheads and complexities. 
    Free, publicly-accessible full text available January 1, 2026
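The random-configuration-plus-opportunistic-scheduling idea can be sketched in a toy Monte-Carlo form: draw an IRS phase configuration, evaluate every UE's effective channel gain, and serve the best UE. For simplicity this sketch samples phases uniformly rather than from the paper's optimized, channel-statistics-aware distribution; `H[k]` holding the cascaded per-element channel of UE k is an assumption of the sketch.

```python
import numpy as np

def opportunistic_gain(H, draws, rng):
    """For each random IRS phase draw, schedule the UE with the best
    effective channel and return the scheduled gains (toy sketch)."""
    n_ue, n_irs = H.shape
    gains = []
    for _ in range(draws):
        phi = np.exp(1j * rng.uniform(0, 2 * np.pi, n_irs))  # random config
        g = np.abs(H @ phi)**2     # effective gain of every UE
        gains.append(g.max())      # opportunistically pick the best UE
    return np.array(gains)
```

The abstract's scaling result says that, with the optimized sampling distribution, the number of UEs needed for some draw to be near-optimal grows only with the rank of the channel covariance, not with the number of IRS elements.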
  8. This paper addresses the mitigation of spatial-wideband (SW) and the resulting beam-split (B-SP) effects in intelligent reflecting surface (IRS)-aided wideband systems. The SW effect occurs when the signal delay across the IRS aperture exceeds the system’s sampling duration, causing the user equipment’s (UE) channel angle to vary with frequency. This leads to the B-SP effect, wherein the IRS cannot coherently beamform to a given UE over the entire bandwidth, reducing array gain and throughput. We first show that partitioning a single IRS into multiple smaller IRSs and distributing them in the environment can naturally mitigate the SW effect (and hence the B-SP effect) by parallelizing the spatial delays and exploiting angle diversity benefits. Next, by determining the maximum number of elements at each smaller IRS to limit B-SP effects and analyzing the achievable sum-rate, we demonstrate that our approach ensures a minimum positive rate over the entire bandwidth of operation. However, distributed IRSs may introduce temporal delay spread (TDS) due to the differences in the path lengths through the IRSs and this may reduce the achievable flat channel gain. To minimize TDS and maintain the full array gain, we show that the optimal placement of the IRSs is on an ellipse with the base station (BS) and UE as the focal points. We also analyze the impact of the optimal IRS placement on TDS and throughput for a UE that is located within a hotspot served by the IRSs. Finally, we illustrate that distributed IRSs enhance angle diversity, which exponentially reduces the outage probability due to B-SP effects as the number of IRSs increases. Numerical results validate the efficacy and simplicity of our method compared to the existing solutions. 
    Free, publicly-accessible full text available January 1, 2026
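The elliptical placement rule follows from the defining property of an ellipse: every point on an ellipse with the BS and UE at its foci gives the same BS-to-IRS-to-UE path length, so the temporal delay spread across the distributed IRSs vanishes. A small geometric sketch (coordinates and names are illustrative):

```python
import numpy as np

def ellipse_points(bs, ue, total_path, n):
    """Points on the ellipse with foci at the BS and UE whose
    BS -> point -> UE path length equals total_path.
    Requires total_path > distance(bs, ue)."""
    bs, ue = np.asarray(bs, float), np.asarray(ue, float)
    c = np.linalg.norm(ue - bs) / 2.0        # focal half-distance
    a = total_path / 2.0                     # semi-major axis
    b = np.sqrt(a**2 - c**2)                 # semi-minor axis
    center = (bs + ue) / 2.0
    u = (ue - bs) / (2.0 * c)                # major-axis direction
    v = np.array([-u[1], u[0]])              # minor-axis direction
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return center + np.outer(a * np.cos(t), u) + np.outer(b * np.sin(t), v)
```

Placing the smaller IRSs at such points equalizes the path lengths through them, which is exactly the zero-TDS condition the abstract describes.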
  9. This letter addresses the problem of estimating a block-sparse signal with unknown group partitions in a multiple measurement vector (MMV) setup. We propose a Bayesian framework that applies an adaptive total variation (TV) penalty on the hyperparameter space of the sparse signal. The main contributions are two-fold. 1) We extend the TV penalty beyond the immediate neighbor, enabling better capture of the signal structure. 2) We provide a dynamic framework to learn the regularization weights for the TV penalty based on the statistical dependencies between the entries of tentative blocks, eliminating the need for fine-tuning. The superior performance of the proposed method is empirically demonstrated by extensive computer simulations against state-of-the-art benchmarks. The proposed solution exhibits both excellent performance and robustness against sparsity-model mismatch. 
    Free, publicly-accessible full text available January 1, 2026
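The extended TV penalty can be written as a weighted sum of absolute differences between hyperparameters at several neighbor distances, not just distance one. A minimal sketch, assuming `weights[d-1][i]` couples `gamma[i]` and `gamma[i+d]` (in the paper these weights are learned adaptively rather than fixed):

```python
import numpy as np

def extended_tv_penalty(gamma, weights):
    """Weighted total-variation penalty on hyperparameters gamma that
    reaches beyond the immediate neighbor (illustrative sketch)."""
    penalty = 0.0
    for d, w in enumerate(weights, start=1):     # d = neighbor distance
        diff = np.abs(gamma[d:] - gamma[:-d])    # |gamma[i+d] - gamma[i]|
        penalty += np.sum(np.asarray(w) * diff)
    return penalty
```

Larger weights on a pair encourage the corresponding hyperparameters, and hence the signal entries, to share a block, which is how the penalty promotes block structure without an explicit block-sparse prior.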
  10. We examine the problem of uplink cell-free access point (AP) placement in the context of optimal throughput. In this regard, we formulate two main placement problems, namely the sum rate and minimum rate maximization problems, and discuss the challenges associated with solving the underlying optimization problems with the help of some simple scenarios. As a practical solution to the AP placement problem, we suggest a vector quantization (VQ) approach. The suitability of the VQ approach to cell-free AP placement is investigated by examining three VQ-based solutions. First, the standard VQ approach, that is, the Lloyd algorithm (using the squared-error distortion function), is described. Second, the tree-structured VQ (TSVQ), which performs successive partitioning of the distribution space, is applied. Third, a probability density function optimized VQ (PDFVQ) procedure is outlined, enabling efficient, low-complexity, and scalable placement aimed at a massive distributed multiple-input multiple-output scenario. While the VQ-based solutions do not explicitly solve the cell-free AP placement problems, numerical experiments show that their sum and minimum rate performances are good enough and offer a good starting point for gradient-based optimization methods. Among the VQ solutions, PDFVQ, with its distinct advantages, offers a good trade-off between sum and minimum rates. 
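The first of the three VQ variants, the Lloyd algorithm with squared-error distortion, maps directly onto AP placement: UE positions play the role of training vectors and the codevectors become AP locations. A minimal sketch (names and the empty-cell handling are illustrative choices, not the paper's implementation):

```python
import numpy as np

def lloyd_ap_placement(ue_positions, n_aps, n_iter=50, seed=0):
    """Lloyd algorithm (squared-error distortion) as an AP-placement
    heuristic: codevectors become AP locations (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    aps = ue_positions[rng.choice(len(ue_positions), n_aps, replace=False)]
    for _ in range(n_iter):
        # Nearest-AP assignment (Voronoi partition of the UE set)
        d = np.linalg.norm(ue_positions[:, None] - aps[None], axis=2)
        labels = d.argmin(axis=1)
        # Centroid update; keep empty cells where they are
        for k in range(n_aps):
            if np.any(labels == k):
                aps[k] = ue_positions[labels == k].mean(axis=0)
    return aps
```

TSVQ and PDFVQ refine this same recipe: TSVQ splits the space successively, while PDFVQ optimizes the placement against the UE position density for scalability.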